Results 1 - 20 of 26
1.
Front Comput Neurosci ; 16: 1057439, 2022.
Article in English | MEDLINE | ID: mdl-36618270

ABSTRACT

Introduction: In recent years, machines powered by deep learning have achieved near-human levels of performance in speech recognition. The fields of artificial intelligence and cognitive neuroscience have finally reached a similar level of performance, despite their huge differences in implementation, and so deep learning models can, in principle, serve as candidates for mechanistic models of the human auditory system. Methods: Utilizing high-performance automatic speech recognition systems, and advanced non-invasive human neuroimaging technology such as magnetoencephalography and multivariate pattern-information analysis, the current study aimed to relate machine-learned representations of speech to recorded human brain representations of the same speech. Results: In one direction, we found a quasi-hierarchical functional organization in human auditory cortex qualitatively matched with the hidden layers of deep artificial neural networks trained as part of an automatic speech recognizer. In the reverse direction, we modified the hidden layer organization of the artificial neural network based on neural activation patterns in human brains. The result was a substantial improvement in word recognition accuracy and learned speech representations. Discussion: We have demonstrated that artificial and brain neural networks can be mutually informative in the domain of speech recognition.
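The core comparison in studies like this one is representational similarity analysis (RSA): a representational dissimilarity matrix (RDM) is computed from a network layer's activations and from brain recordings, and the two RDMs are correlated. The sketch below uses toy data and the common correlation-distance RDM; it is an illustration of the technique, not the authors' pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(layer_acts, brain_acts):
    """Compare two RDMs by correlating their upper triangles."""
    i, j = np.triu_indices(layer_acts.shape[0], k=1)
    a, b = rdm(layer_acts)[i, j], rdm(brain_acts)[i, j]
    return np.corrcoef(a, b)[0, 1]

# Toy example: 6 stimuli, hidden-layer activations vs. sensor patterns.
rng = np.random.default_rng(0)
layer = rng.standard_normal((6, 50))
brain = layer @ rng.standard_normal((50, 20))  # linearly related patterns
print(round(rsa_score(layer, brain), 2))
```

A high score indicates that the layer and the recording impose a similar geometry on the stimulus set, which is the sense in which a hidden layer can "match" a cortical region.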

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4341-4347, 2021 11.
Article in English | MEDLINE | ID: mdl-34892182

ABSTRACT

Modern sequencing technology has produced a vast quantity of proteomic data, which has been key to the development of various deep learning models within the field. However, there are still challenges to overcome with regard to modelling the properties of a protein, especially when labelled resources are scarce. Developing interpretable deep learning models is essential, as proteomics research requires methods to understand the functional properties of proteins. The ability to derive quality information from both the model and the data will play a vital role in the advancement of proteomics research. In this paper, we seek to leverage a BERT model that has been pre-trained on a vast quantity of proteomic data to model a collection of regression tasks using only a minimal amount of data. We adopt a triplet network structure to fine-tune the BERT model for each dataset and evaluate its performance on a set of downstream task predictions: plasma membrane localisation, thermostability, peak absorption wavelength, and enantioselectivity. Our results significantly improve upon the original BERT baseline as well as the previous state-of-the-art models for each task, demonstrating the benefits of using a triplet network for refining such a large pre-trained model on a limited dataset. As a form of white-box deep learning, we also visualise how the model attends to specific parts of the protein and how the model detects critical modifications that change its overall function.
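The triplet structure used for fine-tuning optimises a margin loss over (anchor, positive, negative) embeddings, pulling the anchor toward an example with a similar property value and pushing it away from a dissimilar one. A minimal sketch with toy vectors standing in for pooled BERT outputs (the margin and distances here are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: pull anchor toward positive, push from negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings standing in for pooled BERT outputs of three proteins.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # similar property value
negative = np.array([-1.0, 0.0])  # dissimilar property value
print(triplet_loss(anchor, positive, negative))  # loss is clipped at 0.0
```

When the negative is already farther away than the positive by more than the margin, the loss is zero and the triplet contributes no gradient, which is what makes triplet mining matter in practice.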


Subjects
Deep Learning, Algorithms, Proteins, Proteomics
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4348-4353, 2021 11.
Article in English | MEDLINE | ID: mdl-34892183

ABSTRACT

Understanding the interactions between novel drugs and target proteins is fundamentally important in disease research as discovering drug-protein interactions can be an exceptionally time-consuming and expensive process. Alternatively, this process can be simulated using modern deep learning methods that have the potential of utilising vast quantities of data to reduce the cost and time required to provide accurate predictions. We seek to leverage a set of BERT-style models that have been pre-trained on vast quantities of both protein and drug data. The encodings produced by each model are then utilised as node representations for a graph convolutional neural network, which in turn are used to model the interactions without the need to simultaneously fine-tune both protein and drug BERT models to the task. We evaluate the performance of our approach on two drug-target interaction datasets that were previously used as benchmarks in recent work. Our results significantly improve upon a vanilla BERT baseline approach as well as the former state-of-the-art methods for each task dataset. Our approach builds upon past work in two key areas: first, we take full advantage of two large pre-trained BERT models that provide improved representations of task-relevant properties of both drugs and proteins. Second, inspired by work in natural language processing that investigates how linguistic structure is represented in such models, we perform interpretability analyses that allow us to locate functionally-relevant areas of interest within each drug and protein. By modelling the drug-target interactions as a graph as opposed to a set of isolated interactions, we demonstrate the benefits of combining large pre-trained models and a graph neural network to make state-of-the-art predictions on drug-target binding affinity.
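The pipeline described, frozen encoder outputs used as node features for a graph convolutional network, can be illustrated with a single Kipf-and-Welling-style graph-convolution step. Everything below is a toy stand-in: the drug/protein adjacency, the random "encoder outputs", and the weight matrix are invented for illustration.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: symmetrically normalised adjacency
    with self-loops, then a linear map and ReLU."""
    a = adj + np.eye(adj.shape[0])               # add self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))    # D^(-1/2)
    return np.maximum(0.0, d @ a @ d @ features @ weights)

# Toy graph: 2 drugs + 2 proteins; edges mark candidate interactions.
adj = np.array([[0, 0, 1, 0],
                [0, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
feats = np.random.default_rng(1).standard_normal((4, 8))  # frozen encoder outputs
w = np.random.default_rng(2).standard_normal((8, 4))
print(gcn_layer(adj, feats, w).shape)  # (4, 4): new per-node representations
```

Because the encoders stay frozen, only the small graph-layer weights are trained, which is the efficiency argument the abstract makes for not fine-tuning both BERT models simultaneously.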


Subjects
Neural Networks (Computer), Pharmaceutical Preparations, Natural Language Processing
4.
J Exp Psychol Learn Mem Cogn ; 47(12): 1903-1923, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34472918

ABSTRACT

People often misrecognize objects that are similar to those they have previously encountered. These mnemonic discrimination errors are attributed to shared memory representations (gist) typically characterized in terms of meaning. In two experiments, we investigated multiple semantic and perceptual relations that may contribute: at the concept level, a feature-based measure of concept confusability quantified each concept's tendency to activate other similar concepts via shared features; at the item level, rated item exemplarity indexed the degree to which the specific depicted objects activated their particular concepts. We also measured visual confusability over items using a computational model of vision, and an index of color confusability. Participants studied single (Experiment 1, N = 60) or multiple (Experiment 2, N = 60) objects for each basic-level concept, followed by a recognition memory test including studied items, similar lures, and novel items. People were less likely to recognize studied items with high concept confusability, and less likely to falsely recognize their lures. This points to weaker basic-level semantic gist representations for objects with more confusable concepts because of greater emphasis on coarse processing of shared features relative to fine-grained processing of individual concepts. In contrast, people were more likely to misrecognize lures that were better exemplars of their concept, suggesting that enhanced basic-level semantic gist processing increased errors due to gist across items. False recognition was also more frequent for more visually confusable lures. The results implicate semantic similarity at multiple levels and highlight the importance of perceptual as well as semantic relations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
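A feature-based concept-confusability measure of the general kind described can be sketched as the summed similarity between one concept's feature vector and every other concept's. The exact formula and the tiny binary feature matrix below are illustrative assumptions, not the authors' measure.

```python
import numpy as np

def concept_confusability(features, target):
    """Tendency of one concept to activate others via shared features:
    summed cosine similarity of its feature vector to all other concepts."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[target]
    return float(sims.sum() - 1.0)  # drop self-similarity (exactly 1)

# Toy binary feature matrix: rows = concepts, columns = semantic features.
feats = np.array([[1, 1, 0, 0],   # e.g. "dog"
                  [1, 1, 1, 0],   # e.g. "wolf": shares two features with "dog"
                  [0, 0, 0, 1]], dtype=float)  # e.g. "hammer": no shared features
print(round(concept_confusability(feats, 0), 2))
```

On this toy matrix, the concept sharing features with a close neighbour gets a high score, while the isolated concept scores zero, mirroring the idea that highly confusable concepts have many feature-sharing competitors.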


Subjects
Recognition (Psychology), Semantics, Cognition, Humans, Memory
5.
J Neurosci ; 41(40): 8375-8389, 2021 10 06.
Article in English | MEDLINE | ID: mdl-34413205

ABSTRACT

When encoding new episodic memories, visual and semantic processing is proposed to make distinct contributions to accurate memory and memory distortions. Here, we used fMRI and preregistered representational similarity analysis to uncover the representations that predict true and false recognition of unfamiliar objects. Two semantic models captured coarse-grained taxonomic categories and specific object features, respectively, while two perceptual models embodied low-level visual properties. Twenty-eight female and male participants encoded images of objects during fMRI scanning, and later had to discriminate studied objects from similar lures and novel objects in a recognition memory test. Both perceptual and semantic models predicted true memory. When studied objects were later identified correctly, neural patterns corresponded to low-level visual representations of these object images in the early visual cortex, lingual, and fusiform gyri. In a similar fashion, alignment of neural patterns with fine-grained semantic feature representations in the fusiform gyrus also predicted true recognition. However, emphasis on coarser taxonomic representations predicted forgetting more anteriorly in the anterior ventral temporal cortex, left inferior frontal gyrus and, in an exploratory analysis, left perirhinal cortex. In contrast, false recognition of similar lure objects was associated with weaker visual analysis posteriorly in early visual and left occipitotemporal cortex. The results implicate multiple perceptual and semantic representations in successful memory encoding and suggest that fine-grained semantic as well as visual analysis contributes to accurate later recognition, while processing visual image detail is critical for avoiding false recognition errors.

SIGNIFICANCE STATEMENT: People are able to store detailed memories of many similar objects.
We offer new insights into the encoding of these specific memories by combining fMRI with explicit models of how image properties and object knowledge are represented in the brain. When people processed fine-grained visual properties in occipital and posterior temporal cortex, they were more likely to recognize the objects later and less likely to falsely recognize similar objects. In contrast, while object-specific feature representations in fusiform gyrus predicted accurate memory, coarse-grained categorical representations in frontal and temporal regions predicted forgetting. The data provide the first direct tests of theoretical assumptions about encoding true and false memories, suggesting that semantic representations contribute to specific memories as well as errors.


Subjects
Brain/physiology, Memory/physiology, Pattern Recognition (Visual)/physiology, Photic Stimulation/methods, Recognition (Psychology)/physiology, Semantics, Adolescent, Adult, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging/methods, Male, Young Adult
6.
JMIR Med Inform ; 9(5): e23099, 2021 May 26.
Article in English | MEDLINE | ID: mdl-34037527

ABSTRACT

BACKGROUND: Semantic textual similarity (STS) is a natural language processing (NLP) task that involves assigning a similarity score to 2 snippets of text based on their meaning. This task is particularly difficult in the domain of clinical text, which often features specialized language and the frequent use of abbreviations. OBJECTIVE: We created an NLP system to predict similarity scores for sentence pairs as part of the Clinical Semantic Textual Similarity track in the 2019 n2c2/OHNLP Shared Task on Challenges in Natural Language Processing for Clinical Data. We subsequently sought to analyze the intermediary token vectors extracted from our models while processing a pair of clinical sentences to identify where and how representations of semantic similarity are built in transformer models. METHODS: Given a clinical sentence pair, we take the average predicted similarity score across several independently fine-tuned transformers. In our model analysis we investigated the relationship between the final model's loss and surface features of the sentence pairs and assessed the decodability and representational similarity of the token vectors generated by each model. RESULTS: Our model achieved a correlation of 0.87 with the ground-truth similarity score, reaching 6th place out of 33 teams (with a first-place score of 0.90). In detailed qualitative and quantitative analyses of the model's loss, we identified the system's failure to correctly model semantic similarity when both sentence pairs contain details of medical prescriptions, as well as its general tendency to overpredict semantic similarity given significant token overlap. The token vector analysis revealed divergent representational strategies for predicting textual similarity between bidirectional encoder representations from transformers (BERT)-style models and XLNet. 
We also found that a large amount of information relevant to predicting STS can be captured using a combination of a classification token and the cosine distance between sentence-pair representations in the first layer of a transformer model that did not produce the best predictions on the test set. CONCLUSIONS: We designed and trained a system that uses state-of-the-art NLP models to achieve very competitive results on a new clinical STS data set. As our approach uses no hand-crafted rules, it serves as a strong deep learning baseline for this task. Our key contribution is a detailed analysis of the model's outputs and an investigation of the heuristic biases learned by transformer models. We suggest future improvements based on these findings. In our representational analysis, we explore how different transformer models converge or diverge in their representation of semantic signals as the tokens of the sentences are augmented by successive layers. This analysis sheds light on how these "black box" models integrate semantic similarity information in intermediate layers, and points to new research directions in model distillation and sentence embedding extraction for applications in clinical NLP.
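Two ingredients of the system and its analysis can be sketched directly: averaging similarity predictions across independently fine-tuned models, and cosine similarity between sentence representations. The vectors and scores below are toy values, not outputs of the actual models.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two sentence embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def ensemble_score(per_model_scores):
    """Final STS prediction: mean of the fine-tuned models' scores."""
    return float(np.mean(per_model_scores))

# Toy sentence-pair embeddings from two hypothetical encoders.
emb_a = np.array([0.2, 0.9, 0.1])
emb_b = np.array([0.1, 0.8, 0.3])
print(round(cosine_similarity(emb_a, emb_b), 3))
print(ensemble_score([4.1, 3.8, 4.3]))  # toy scores on the 0-5 STS scale
```

Averaging decorrelates individual models' errors, which is the usual motivation for this kind of ensemble on shared-task leaderboards.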

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2361-2367, 2020 07.
Article in English | MEDLINE | ID: mdl-33018481

ABSTRACT

Deep learning has proven to be a useful tool for modelling protein properties. However, given the variability in the length of proteins, it can be difficult to summarise the sequence of amino acids effectively. In many cases, as a result of using fixed-length representations, information about long proteins can be lost through truncation, or model training can be slow due to the use of excessive padding. In this work, we aim to overcome these problems by expanding upon the original vocabulary used to represent the protein sequence. To this end, we utilise two prominent subword algorithms that have been previously used to reach state-of-the-art results in various Natural Language Processing tasks. The algorithms are used to encode the original protein sequence into a set of subsequences before they are analysed by a Doc2Vec model. The pre-trained encodings produced by each algorithm are tested on a variety of downstream tasks: four protein property prediction tasks (plasma membrane localization, thermostability, peak absorption wavelength, enantioselectivity) as well as drug-target affinity prediction tasks over two datasets. Our results significantly improve on the state-of-the-art for these tasks, demonstrating the benefits of using subword compression algorithms for modelling proteins.
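Subword algorithms of the kind mentioned (for example byte-pair encoding) repeatedly merge the most frequent adjacent symbol pair into a new vocabulary item, shortening long sequences without truncation. A minimal BPE-style sketch on a toy amino-acid string; the sequence is invented and only one merge step is shown:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Toy amino-acid sequence; the repeated "GA" motif becomes one vocabulary item.
seq = list("MGAGAGAKL")
pair = most_frequent_pair(seq)
print(pair, merge_pair(seq, pair))  # ('G', 'A') ['M', 'GA', 'GA', 'GA', 'K', 'L']
```

Iterating this merge step builds the expanded vocabulary; the resulting subsequences are what a Doc2Vec-style model would then consume in place of raw amino acids.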


Subjects
Algorithms, Vocabulary, Amino Acid Sequence, Natural Language Processing, Proteins
8.
Cogn Process ; 21(4): 583-586, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33063246

ABSTRACT

Asking subjects to list semantic properties for concepts is essential for predicting performance in several linguistic and non-linguistic tasks and for creating carefully controlled stimuli for experiments. The property elicitation task and the ensuing norms are widely used across the field, to investigate the organization of semantic memory and design computational models thereof. The contributions of the current Special Topic discuss several core issues concerning how semantic property norms are constructed and how they may be used for research aiming at understanding cognitive processing.


Subjects
Linguistics, Semantics, Comprehension, Humans, Memory
9.
JAMIA Open ; 3(2): 173-177, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32734156

ABSTRACT

Phenotypes are the result of the complex interplay between environmental and genetic factors. To better understand the interactions between chemical compounds and human phenotypes, and to further exposome research, we have developed "phexpo," a tool to perform and explore bidirectional chemical and phenotype interactions using enrichment analyses. Phexpo utilizes gene annotations from 2 curated public repositories, the Comparative Toxicogenomics Database and the Human Phenotype Ontology. We have applied phexpo in 3 case studies linking: (1) individual chemicals (a drug, warfarin, and an industrial chemical, chloroform) with phenotypes, (2) individual phenotypes (left ventricular dysfunction) with chemicals, and (3) multiple phenotypes (covering polycystic ovary syndrome) with chemicals. The results of these analyses demonstrated successful identification of relevant chemicals or phenotypes supported by bibliographic references. The phexpo R package (https://github.com/GHLCLab/phexpo) provides a new bidirectional analysis approach covering relationships from chemicals to phenotypes and from phenotypes to chemicals.
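Gene-set enrichment of this kind is commonly based on the hypergeometric test: how surprising is the overlap between the genes annotated to a chemical and those annotated to a phenotype? The abstract does not specify phexpo's statistic, so the formulation and all counts below are illustrative assumptions.

```python
from math import comb

def enrichment_pvalue(overlap, set_a, set_b, universe):
    """Hypergeometric upper-tail p-value: probability of seeing at least
    `overlap` shared genes between two annotation sets by chance."""
    return sum(
        comb(set_a, k) * comb(universe - set_a, set_b - k)
        for k in range(overlap, min(set_a, set_b) + 1)
    ) / comb(universe, set_b)

# Toy numbers: 30 genes annotated to a chemical, 25 to a phenotype,
# 12 shared, out of a universe of 1000 annotated genes.
p = enrichment_pvalue(12, 30, 25, 1000)
print(p < 0.001)  # True: far more overlap than expected by chance
```

Running the test in both directions (chemical-to-phenotype and phenotype-to-chemical) over curated annotations is the bidirectional idea the abstract describes.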

10.
PLoS One ; 14(9): e0214342, 2019.
Article in English | MEDLINE | ID: mdl-31525201

ABSTRACT

Brain decoding-the process of inferring a person's momentary cognitive state from their brain activity-has enormous potential in the field of human-computer interaction. In this study we propose a zero-shot EEG-to-image brain decoding approach which makes use of state-of-the-art EEG preprocessing and feature selection methods, and which maps EEG activity to biologically inspired computer vision and linguistic models. We apply this approach to solve the problem of identifying viewed images from recorded brain activity in a reliable and scalable way. We demonstrate competitive decoding accuracies across two EEG datasets, using a zero-shot learning framework more applicable to real-world image retrieval than traditional classification techniques.
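The zero-shot retrieval step can be sketched as nearest-neighbour search in the target embedding space: an embedding regressed from EEG features is compared against candidate image embeddings, which need never have appeared in training. The regression stage is omitted here and all vectors are toy values.

```python
import numpy as np

def zero_shot_decode(pred_embedding, candidate_embeddings):
    """Rank candidate images by cosine similarity to the embedding
    predicted from EEG; the top-ranked item is the decoded image."""
    c = candidate_embeddings / np.linalg.norm(candidate_embeddings,
                                              axis=1, keepdims=True)
    p = pred_embedding / np.linalg.norm(pred_embedding)
    return np.argsort(-(c @ p))  # best match first

# Toy semantic embeddings for 3 candidate images (unseen during training).
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
predicted = np.array([0.1, 0.9])   # embedding regressed from EEG features
print(zero_shot_decode(predicted, candidates))  # [1 2 0]
```

Because matching happens in the shared embedding space rather than over a fixed label set, new candidate images can be added at retrieval time, which is what makes the framework zero-shot.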


Subjects
Brain-Computer Interfaces, Machine Learning, Visual Perception, Brain/physiology, Electroencephalography/methods, Humans
11.
J Neurosci ; 39(3): 519-527, 2019 01 16.
Article in English | MEDLINE | ID: mdl-30459221

ABSTRACT

Spoken word recognition in context is remarkably fast and accurate, with recognition times of ∼200 ms, typically well before the end of the word. The neurocomputational mechanisms underlying these contextual effects are still poorly understood. This study combines source-localized electroencephalographic and magnetoencephalographic (EMEG) measures of real-time brain activity with multivariate representational similarity analysis to determine directly the timing and computational content of the processes evoked as spoken words are heard in context, and to evaluate the respective roles of bottom-up and predictive processing mechanisms in the integration of sensory and contextual constraints. Male and female human participants heard simple (modifier-noun) English phrases that varied in the degree of semantic constraint that the modifier (W1) exerted on the noun (W2), as in pairs, such as "yellow banana." We used gating tasks to generate estimates of the probabilistic predictions generated by these constraints as well as measures of their interaction with the bottom-up perceptual input for W2. Representation similarity analysis models of these measures were tested against electroencephalographic and magnetoencephalographic brain data across a bilateral fronto-temporo-parietal language network. Consistent with probabilistic predictive processing accounts, we found early activation of semantic constraints in frontal cortex (LBA45) as W1 was heard. The effects of these constraints (at 100 ms after W2 onset in left middle temporal gyrus and at 140 ms in left Heschl's gyrus) were only detectable, however, after the initial phonemes of W2 had been heard. 
Within an overall predictive processing framework, bottom-up sensory inputs are still required to achieve early and robust spoken word recognition in context.

SIGNIFICANCE STATEMENT: Human listeners recognize spoken words in natural speech contexts with remarkable speed and accuracy, often identifying a word well before all of it has been heard. In this study, we investigate the brain systems that support this important capacity, using neuroimaging techniques that can track real-time brain activity during speech comprehension. This makes it possible to locate the brain areas that generate predictions about upcoming words and to show how these expectations are integrated with the evidence provided by the speech being heard. We use the timing and localization of these effects to provide the most specific account to date of how the brain achieves an optimal balance between prediction and sensory input in the interpretation of spoken language.


Subjects
Anticipation (Psychology)/physiology, Comprehension/physiology, Recognition (Psychology)/physiology, Sensation/physiology, Speech Perception/physiology, Animals, Brain/physiology, Electroencephalography, Entropy, Female, Magnetic Resonance Imaging, Magnetoencephalography, Male, Nerve Net/physiology, Neuroimaging, Prefrontal Cortex/physiology, Rats, Semantics, Sensory Gating/physiology
12.
J Cogn Neurosci ; 30(11): 1590-1605, 2018 11.
Article in English | MEDLINE | ID: mdl-30125217

ABSTRACT

Object recognition requires dynamic transformations of low-level visual inputs to complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed visual information was represented in low-frequency activity throughout the ventral visual pathway, and semantic information was represented in theta activity. Furthermore, directed connectivity showed visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics resulting in object-specific semantics.


Subjects
Pattern Recognition (Visual)/physiology, Photic Stimulation/methods, Semantics, Theta Rhythm/physiology, Visual Pathways/diagnostic imaging, Visual Pathways/physiology, Computational Biology/methods, Female, Humans, Magnetic Resonance Imaging/methods, Magnetoencephalography/methods, Male, Random Allocation
13.
Sci Rep ; 8(1): 10636, 2018 Jul 13.
Article in English | MEDLINE | ID: mdl-30006530

ABSTRACT

Recognising an object involves rapid visual processing and activation of semantic knowledge about the object, but how visual processing activates and interacts with semantic representations remains unclear. Cognitive neuroscience research has shown that while visual processing involves posterior regions along the ventral stream, object meaning involves more anterior regions, especially perirhinal cortex. Here we investigate visuo-semantic processing by combining a deep neural network model of vision with an attractor network model of semantics, such that visual information maps onto object meanings represented as activation patterns across features. In the combined model, concept activation is driven by visual input and co-occurrence of semantic features, consistent with neurocognitive accounts. We tested the model's ability to explain fMRI data where participants named objects. Visual layers explained activation patterns in early visual cortex, whereas pattern-information in perirhinal cortex was best explained by later stages of the attractor network, when detailed semantic representations are activated. Posterior ventral temporal cortex was best explained by intermediate stages corresponding to initial semantic processing, when visual information has the greatest influence on the emerging semantic representation. These results provide proof of principle of how a mechanistic model of combined visuo-semantic processing can account for pattern-information in the ventral stream.
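The semantic half of the combined model is an attractor network in which feature co-occurrence drives settling toward a stable activation pattern. The sketch below is a generic attractor-style update under sigmoid dynamics, not the authors' implementation; the weights, visual drive, and step count are toy values.

```python
import numpy as np

def settle(weights, visual_input, steps=20):
    """Attractor-style settling: semantic feature activations are
    iteratively updated from feature co-occurrence weights plus a
    fixed visual drive, until the pattern stabilises."""
    state = np.zeros(len(visual_input))
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(weights @ state + visual_input)))
    return state

# Toy network: features 0 and 1 co-occur; feature 2 is unrelated.
w = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
drive = np.array([1.5, 0.0, -1.0])  # vision activates feature 0 only
print(settle(w, drive).round(2))
```

Note that feature 1 becomes active despite receiving no visual drive, purely through its learned co-occurrence with feature 0; this is the mechanism by which visual input comes to activate a full semantic representation.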


Subjects
Neurological Models, Pattern Recognition (Visual)/physiology, Perirhinal Cortex/physiology, Visual Cortex/physiology, Visual Pathways/physiology, Brain Mapping/methods, Female, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Male, Perirhinal Cortex/diagnostic imaging, Semantics, Visual Cortex/diagnostic imaging, Visual Pathways/diagnostic imaging
14.
Lang Cogn Neurosci ; 32(2): 221-235, 2017 Feb 07.
Article in English | MEDLINE | ID: mdl-28164141

ABSTRACT

As spoken language unfolds over time the speech input transiently activates multiple candidates at different levels of the system - phonological, lexical, and syntactic - which in turn leads to short-lived between-candidate competition. In an fMRI study, we investigated how different kinds of linguistic competition may be modulated by the presence or absence of a prior context (Tyler 1984; Tyler et al. 2008). We found significant effects of lexico-phonological competition for isolated words, but not for words in short phrases, with high competition yielding greater activation in left inferior frontal gyrus (LIFG) and posterior temporal regions. This suggests that phrasal contexts reduce lexico-phonological competition by eliminating form-class inconsistent cohort candidates. A corpus-derived measure of lexico-syntactic competition was associated with greater activation in LIFG for verbs in phrases, but not for isolated verbs, indicating that lexico-syntactic information is boosted by the phrasal context. Together, these findings indicate that LIFG plays a general role in resolving different kinds of linguistic competition.

15.
J Neurosci ; 37(5): 1312-1319, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28028201

ABSTRACT

Comprehending speech involves the rapid and optimally efficient mapping from sound to meaning. Influential cognitive models of spoken word recognition (Marslen-Wilson and Welsh, 1978) propose that the onset of a spoken word initiates a continuous process of activation of the lexical and semantic properties of the word candidates matching the speech input and competition between them, which continues until the point at which the word is differentiated from all other cohort candidates (the uniqueness point, UP). At this point, the word is recognized uniquely and only the target word's semantics are active. Although it is well established that spoken word recognition engages the superior (Rauschecker and Scott, 2009), middle, and inferior (Hickok and Poeppel, 2007) temporal cortices, little is known about the real-time brain activity that underpins the computations and representations that evolve over time during the transformation from speech to meaning. Here, we test for the first time the spatiotemporal dynamics of these processes by collecting MEG data while human participants listened to spoken words. By constructing quantitative models of competition and access to meaning in combination with spatiotemporal searchlight representational similarity analysis (Kriegeskorte et al., 2006) in source space, we were able to test where and when these models produced significant effects. We found early transient effects ∼400 ms before the UP of lexical competition in left supramarginal gyrus, left superior temporal gyrus, left middle temporal gyrus (MTG), and left inferior frontal gyrus (IFG) and of semantic competition in MTG, left angular gyrus, and IFG. After the UP, there were no competitive effects, only target-specific semantic effects in angular gyrus and MTG. SIGNIFICANCE STATEMENT: Understanding spoken words involves complex processes that transform the auditory input into a meaningful interpretation. 
This effortless transition occurs on millisecond timescales, with remarkable speed and accuracy and without any awareness of the complex computations involved. Here, we reveal the real-time neural dynamics of these processes by collecting data about listeners' brain activity as they hear spoken words. Using novel statistical models of different aspects of the recognition process, we can locate directly which parts of the brain are accessing the stored form and meaning of words and how the competition between different word candidates is resolved neurally in real time. This gives us a uniquely differentiated picture of the neural substrate for the first 500 ms of word recognition.


Subjects
Auditory Perception/physiology, Cerebral Cortex/physiology, Comprehension/physiology, Adult, Brain Mapping, Electroencephalography, Female, Humans, Magnetoencephalography, Male, Psychomotor Performance/physiology, Recognition (Psychology)/physiology, Sensory Gating/physiology, Sound Localization/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Young Adult
16.
Cogn Sci ; 40(2): 325-50, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26043761

ABSTRACT

Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features--the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co-occurrence (correlational strength)--determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation.


Subjects
Comprehension/physiology, Concept Formation/physiology, Theoretical Models, Speech/physiology, Adolescent, Adult, Computer Simulation, Humans, Reaction Time, Time Factors, Young Adult
17.
Cereb Cortex ; 25(10): 3602-12, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25209607

ABSTRACT

To respond appropriately to objects, we must process visual inputs rapidly and assign them meaning. This involves highly dynamic, interactive neural processes through which information accumulates and cognitive operations are resolved across multiple time scales. However, there is currently no model of object recognition which provides an integrated account of how visual and semantic information emerge over time; therefore, it remains unknown how and when semantic representations are evoked from visual inputs. Here, we test whether a model of individual objects--based on combining the HMax computational model of vision with semantic-feature information--can account for and predict time-varying neural activity recorded with magnetoencephalography. We show that combining HMax and semantic properties provides a better account of neural object representations compared with the HMax alone, both through model fit and classification performance. Our results show that modeling and classifying individual objects is significantly improved by adding semantic-feature information beyond ∼200 ms. These results provide important insights into the functional properties of visual processing across time.


Subjects
Cerebral Cortex/physiology; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Semantics; Adult; Concept Formation/physiology; Female; Humans; Magnetoencephalography; Male; Models, Neurological; Regression Analysis; Young Adult
18.
Cogn Sci ; 38(4): 638-82, 2014.
Article in English | MEDLINE | ID: mdl-25019134

ABSTRACT

Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
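The reweighting step described above (a linear combination of frequency and statistical association metrics) can be illustrated with pointwise mutual information as one such metric. Everything below is a toy sketch: the triples, counts, and weights are invented, and the paper combines four metrics rather than the single PMI term shown here.

```python
import math

# Hypothetical extracted triples with raw corpus counts (not the paper's data).
triples = {
    ("car", "require", "petrol"): 40,
    ("car", "be", "fast"): 25,
    ("motorbike", "require", "petrol"): 15,
    ("banana", "be", "yellow"): 19,
    ("car", "be", "banana"): 1,   # noise triple the reweighting should demote
}

total = sum(triples.values())
concept_counts, feature_counts = {}, {}
for (c, rel, f), n in triples.items():
    concept_counts[c] = concept_counts.get(c, 0) + n
    feature_counts[(rel, f)] = feature_counts.get((rel, f), 0) + n

def pmi(triple):
    """Pointwise mutual information between concept and relation-feature pair."""
    c, rel, f = triple
    p_joint = triples[triple] / total
    p_c = concept_counts[c] / total
    p_f = feature_counts[(rel, f)] / total
    return math.log(p_joint / (p_c * p_f))

def score(triple, w_freq=0.5, w_pmi=0.5):
    """Linear combination of log frequency and PMI; weights are placeholders."""
    return w_freq * math.log(triples[triple]) + w_pmi * pmi(triple)

ranked = sorted(triples, key=score, reverse=True)
```

The low-frequency, weakly associated noise triple ends up ranked last, which is the intended effect of combining frequency with association strength.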


Subjects
Natural Language Processing; Cognition; Humans; Judgment; Language; Models, Theoretical; Semantics
19.
Behav Res Methods ; 46(4): 1119-27, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24356992

ABSTRACT

Theories of the representation and processing of concepts have been greatly enhanced by models based on information available in semantic property norms. This information relates both to the identity of the features produced in the norms and to their statistical properties. In this article, we introduce a new and large set of property norms that are designed to be a more flexible tool to meet the demands of many different disciplines interested in conceptual knowledge representation, from cognitive psychology to computational linguistics. As well as providing all features listed by 2 or more participants, we also show the considerable linguistic variation that underlies each normalized feature label and the number of participants who generated each variant. Our norms are highly comparable with the largest extant set (McRae, Cree, Seidenberg, & McNorgan, 2005) in terms of the number and distribution of features. In addition, we show how the norms give rise to a coherent category structure. We provide these norms in the hope that the greater detail available in the Centre for Speech, Language and the Brain norms should further promote the development of models of conceptual knowledge. The norms can be downloaded at www.csl.psychol.cam.ac.uk/propertynorms.
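The "coherent category structure" such norms give rise to can be checked by comparing concepts via their feature production frequencies: concepts from the same category should be more similar than concepts from different categories. The sketch below uses invented toy norms, not the CSLB data, and cosine similarity as one simple choice of measure.

```python
import math

# Toy production-frequency norms (how many participants listed each feature).
# All concepts, features, and counts are illustrative placeholders.
norms = {
    "dog":    {"has_fur": 20, "has_4_legs": 18, "is_pet": 15},
    "cat":    {"has_fur": 22, "has_4_legs": 19, "is_pet": 17},
    "hammer": {"is_tool": 21, "has_handle": 16, "made_of_metal": 12},
}

def cosine(a, b):
    """Cosine similarity between two sparse feature-frequency vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

within = cosine(norms["dog"], norms["cat"])        # same category (animals)
between = cosine(norms["dog"], norms["hammer"])    # different categories
```

With these toy norms, within-category similarity is near 1 while between-category similarity is 0, the pattern a coherent category structure produces at scale.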


Subjects
Concept Formation/classification; Language; Semantics; Speech; Adolescent; Adult; Female; Humans; Linguistics; Male; Reference Values; Word Association Tests; Young Adult
20.
J Neurosci ; 33(48): 18906-16, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285896

ABSTRACT

Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.
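The key contrast in the abstract, representations invariant to stimulus modality versus uncorrelated across modality, can be sketched by correlating a region's word-evoked and picture-evoked dissimilarity matrices. The data below are simulated stand-ins for searchlight RDMs; region names and the noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6  # number of concepts in this toy example

def sym_rdm(rng):
    """Random symmetric dissimilarity matrix with a zero diagonal."""
    a = rng.random((n, n))
    d = (a + a.T) / 2.0
    np.fill_diagonal(d, 0.0)
    return d

def upper(d):
    return d[np.triu_indices_from(d, k=1)]

# "LIPS-like" region: picture RDM closely tracks the word RDM
# (modality-invariant representation).
lips_words = sym_rdm(rng)
lips_pics = lips_words + 0.05 * sym_rdm(rng)

# "LpMTG-like" region: word and picture RDMs are unrelated.
mtg_words = sym_rdm(rng)
mtg_pics = sym_rdm(rng)

def cross_modal_corr(d1, d2):
    """Correlation between the two modalities' dissimilarity structures."""
    return np.corrcoef(upper(d1), upper(d2))[0, 1]

lips_r = cross_modal_corr(lips_words, lips_pics)
mtg_r = cross_modal_corr(mtg_words, mtg_pics)
```

A high cross-modal correlation is the simulated analogue of the LIPS result; a near-zero one is the analogue of LpMTG, where word and object representations diverge.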


Subjects
Form Perception/physiology; Reading; Semantics; Visual Perception/physiology; Adult; Brain Mapping; Cluster Analysis; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Photic Stimulation; Psycholinguistics; Young Adult